
    Using locally weighted regression to estimate the functional size of software: a preliminary study

    In software engineering, measuring software functional size via the IFPUG (International Function Point Users Group) Function Point Analysis using the standard manual process can be a long and expensive activity. To address this problem, several early estimation methods have been proposed and have become de facto standard processes. Among these, a prominent one is High-level Function Point Analysis. Recently, the Simple Function Point method has been released by IFPUG; although it is a proper measurement method, its measures convert readily to traditional Function Points, so it may also be used as an estimation method. Both High-level Function Point Analysis and Simple Function Points skip the difficult and time-consuming activities needed to weight data and transaction functions. This makes the process faster and cheaper, but yields approximate measures. The accuracy of these methods has been evaluated, also via large-scale empirical studies, showing that the resulting approximate measures are sufficiently accurate for practical use. In this paper, locally weighted regression is applied to the problem outlined above. Our empirical study shows that estimates obtained via locally weighted regression are more accurate than those obtained via High-level Function Point Analysis, but are not substantially better than those yielded by alternative estimation methods based on linear regression. The Simple Function Point method appears to yield measures that are well correlated with those obtained via standard measurement. In conclusion, locally weighted regression appears to be effective and accurate enough for estimating software functional size.
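
    To illustrate the technique named in this abstract, the sketch below implements a locally weighted (LOESS-style) linear regression for predicting functional size from a simpler approximate measure. The tricube kernel, the neighborhood fraction, and the synthetic numbers are illustrative assumptions, not the study's actual data or code.

```python
import numpy as np

def loess_estimate(x_train, y_train, x_query, frac=0.5):
    """Estimate y at each query point via locally weighted linear regression.

    For each query point, the nearest `frac` of the training data is fitted
    with a tricube-weighted least-squares line, and the line is evaluated
    at the query point.
    """
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    k = max(2, int(frac * len(x_train)))          # neighborhood size
    estimates = []
    for x0 in np.atleast_1d(x_query):
        d = np.abs(x_train - x0)
        idx = np.argsort(d)[:k]                   # k nearest neighbors
        h = d[idx].max() or 1.0                   # local bandwidth
        w = (1 - (d[idx] / h) ** 3) ** 3          # tricube kernel weights
        X = np.column_stack([np.ones(k), x_train[idx]])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train[idx])
        estimates.append(beta[0] + beta[1] * x0)
    return np.array(estimates)

# Illustrative use: predict IFPUG size from a simplified count
# (synthetic numbers, not data from the study).
approx_size = [20, 35, 50, 80, 120, 150, 200, 260]
ifpug_size  = [95, 160, 210, 340, 500, 630, 820, 1050]
print(loess_estimate(approx_size, ifpug_size, [100]))
```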

    Software development effort estimation using function points and simpler functional measures: a comparison

    Background: Functional size measures are widely used for estimating software development effort. After the introduction of Function Points, a few "simplified" measures have been proposed, aiming to make measurement simpler and quicker, but also to make measures applicable when fully detailed software specifications are not yet available. It has been shown that, in general, software size measures expressed in Function Points do not support more accurate effort estimation than simplified measures. Objective: Many practitioners believe that for "complex" projects, i.e., projects that involve many complex transactions and data, traditional Function Point measures support more accurate estimates than simpler functional size measures that do not account for greater-than-average complexity. In this paper, we aim to produce evidence that confirms or disproves this belief. Method: Based on a dataset that contains both effort and size data, an empirical study is performed to provide evidence concerning the relations that link functional size (measured in different ways) and development effort. Results: Our analysis shows no statistically significant evidence that Function Points are generally better at estimating more complex projects than simpler measures. Function Points appeared better under some specific conditions, but under those conditions they also performed worse than simpler measures when dealing with less complex projects. Conclusions: Traditional Function Points do not seem to effectively account for software complexity. To improve effort estimation, researchers should probably dedicate their efforts to devising a way of measuring software complexity that can be used in effort models together with (traditional or simplified) functional size measures.
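
    As a rough illustration of the kind of comparison described above, the sketch below fits the classic log-log effort model, effort = a * size^b, for two alternative size measures and compares their mean magnitude of relative error (MMRE). All project numbers are synthetic placeholders, not the study's dataset.

```python
import numpy as np

def loglog_effort_model(size, effort):
    """Fit the classic log-log effort model: effort = a * size^b."""
    b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
    return np.exp(log_a), b

def mmre(size, effort, a, b):
    """Mean magnitude of relative error of the fitted model."""
    pred = a * np.asarray(size, dtype=float) ** b
    return np.mean(np.abs(pred - effort) / effort)

# Synthetic projects: size in Function Points vs. a simplified measure,
# effort in person-hours (made-up numbers for illustration only).
fp     = np.array([120, 250, 400, 640, 900, 1300])
simple = np.array([100, 230, 360, 600, 850, 1200])
effort = np.array([900, 2100, 3600, 6400, 9800, 15000])

for name, size in [("Function Points", fp), ("Simplified measure", simple)]:
    a, b = loglog_effort_model(size, effort)
    print(f"{name}: effort = {a:.2f} * size^{b:.2f}, "
          f"MMRE = {mmre(size, effort, a, b):.2%}")
```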

    Machine learning in orthopedics: a literature review

    In this paper we present the findings of a systematic literature review covering the articles published in the last two decades in which the authors described the application of machine learning techniques and methods to an orthopedic problem or purpose. By searching both the Scopus and Medline databases, we retrieved, screened and analyzed the content of 70 journal articles, and coded these resources following an iterative method within a Grounded Theory approach. We report the survey findings by outlining the articles' content in terms of the main machine learning techniques mentioned therein, the orthopedic application domains, the source data, and the quality of their predictive performance.

    Estimating functional size of software with confidence intervals

    In many projects, software functional size is measured via the IFPUG (International Function Point Users Group) Function Point Analysis method. However, applying Function Point Analysis via the IFPUG process is possible only when functional user requirements are known completely and in detail. To solve this problem, several early estimation methods have been proposed and have become de facto standard processes. Among these, a prominent one is the 'NESMA (Netherlands Software Metrics Association) estimated' method, also known as High-level Function Point Analysis. The NESMA estimated method simplifies measurement by assigning fixed weights to Base Functional Components, instead of determining the weights via detailed analysis of data and transactions. This makes the process faster and cheaper, and applicable when some details concerning data and transactions are not yet known. The accuracy of this method has been evaluated, also via large-scale empirical studies, showing that the yielded approximate measures are sufficiently accurate for practical usage. However, a limitation of the method is that it provides a single point estimate of size, while other methods can provide confidence intervals, i.e., they indicate, with a given confidence level, that the size to be estimated lies in a range. In this paper, we aim to enhance the NESMA estimated method with the possibility of computing a confidence interval. To this end, we carry out an empirical study using data from real-life projects. The proposed approach appears effective. We expect that the ability to estimate that the size of an application lies in a range will help project managers deal with the risks connected with inevitable estimation errors.
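
    The sketch below illustrates the idea. The fixed weights are those commonly cited for the NESMA estimated method (data functions rated low, transactional functions rated average, using standard IFPUG weights); the confidence interval comes from an ordinary least-squares prediction interval over historical (estimate, measured size) pairs, which is one plausible realization of the paper's proposal, not necessarily the authors' exact procedure. The historical numbers are invented.

```python
import numpy as np
from scipy import stats

# Fixed weights of the NESMA estimated method: data functions rated low,
# transactional functions rated average (standard IFPUG weights).
WEIGHTS = {"ILF": 7, "EIF": 5, "EI": 4, "EO": 5, "EQ": 4}

def nesma_estimated(counts):
    """Approximate size from counts of Base Functional Components."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

def prediction_interval(x_hist, y_hist, x_new, confidence=0.90):
    """OLS prediction interval for the measured size given an estimate.

    x_hist: past NESMA estimates; y_hist: corresponding measured IFPUG
    sizes. This is one assumed way to attach a confidence interval.
    """
    x, y = np.asarray(x_hist, float), np.asarray(y_hist, float)
    n = len(x)
    slope, intercept, *_ = stats.linregress(x, y)
    resid = y - (intercept + slope * x)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))        # residual std. error
    se = s * np.sqrt(1 + 1 / n
                     + (x_new - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
    t = stats.t.ppf((1 + confidence) / 2, df=n - 2)
    center = intercept + slope * x_new
    return center - t * se, center + t * se

est = nesma_estimated({"ILF": 10, "EIF": 4, "EI": 20, "EO": 12, "EQ": 8})
# Historical (estimate, measured) pairs would come from past projects.
lo, hi = prediction_interval([200, 320, 450, 600, 780],
                             [210, 300, 470, 640, 760], est)
print(f"NESMA estimate: {est} FP, 90% interval: [{lo:.0f}, {hi:.0f}]")
```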

    IGV short scale to assess implicit value of visualizations through explicit interaction

    This paper reports the assessment of the infographics-value (IGV) short scale, designed to measure the value in the use of infographics. The scale was designed to assess the implicit quality dimensions of infographics, as experienced during the execution of tasks in a contextualized scenario. Users were asked to retrieve a piece of information by explicitly interacting with the infographics. After usage, they were asked to rate the quality dimensions of the infographics, namely usefulness, intuitiveness, clarity, informativity, and beauty; the overall value perceived from interacting with the infographics was also included in the survey. Each quality dimension, as well as overall value, was coded as a six-point rating scale item. The proposed IGV short scale model was validated with 650 people. Our analysis confirmed that all the dimensions considered in our scale were independently significant and contributed to assessing the implicit value of infographics. The IGV short scale is a lightweight but exhaustive tool to rapidly assess the implicit value of an explicit interaction with infographics in daily tasks, where value in use is crucial to measuring the situated effectiveness of visual tools.
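
    As a minimal illustration of how such ratings might be inspected, the sketch below computes per-dimension means and each dimension's correlation with the overall-value item. The dimension names come from the abstract; the ratings are made up and the analysis is far simpler than the scale validation the paper describes.

```python
import numpy as np

# Illustrative 1-6 ratings from a handful of respondents (synthetic).
dimensions = ["usefulness", "intuitiveness", "clarity",
              "informativity", "beauty"]
ratings = np.array([
    # usef. intui. clar. info. beauty overall
    [5, 4, 5, 4, 3, 5],
    [4, 5, 4, 4, 4, 4],
    [6, 5, 6, 5, 4, 6],
    [3, 3, 4, 3, 5, 3],
])
overall = ratings[:, -1]          # last column: overall value item

for i, name in enumerate(dimensions):
    r = np.corrcoef(ratings[:, i], overall)[0, 1]
    print(f"{name:>13}: mean = {ratings[:, i].mean():.2f}, "
          f"r with overall value = {r:.2f}")
```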

    Exploring Research through Design in Animal-Computer Interaction

    This paper explores Research through Design (RtD) as a potential methodology for developing new interactive experiences for animals. We present an example study from an ongoing project and examine whether RtD offers an appropriate framework for developing knowledge in the context of Animal-Computer Interaction (ACI), as well as considering how best to document such work. We discuss the design journey we undertook to develop interactive systems for captive elephants and the extent to which RtD has enabled us to explore concept development and documentation of research. As a result of our explorations, we propose that particular aspects of RtD can help ACI researchers gain fresh perspectives on the design of technology-enabled devices for non-human animals. We argue that these methods of working can support the investigation of particular and complex situations where no idiomatic interactions yet exist, where collaborative practice is desirable, and where the designed objects themselves offer a conceptual window for future research and development.

    KP-LAB Knowledge Practices Laboratory -- Specification of end-user applications

    The present deliverable provides a high-level view of the new specifications of end-user applications defined in WPII during the M37-M46 period of the KP-Lab project. This is the last in a series of four deliverables that cover all the tools developed in the project, the previous ones being D6.1, D6.4 and D6.6. This deliverable presents specifications of the new functionalities for supporting the dedicated research studies defined in the latest revision of the KP-Lab research strategy. The tools addressed are: the analytic tools (Data export, Time-line-based analyser, Visual analyser), Clipboard, Search, Versioning of uploadable content items, Visual Model Editor (VME) and Visual Modeling Language Editor (VMLE). The main part of the deliverable provides a summary of the tool specifications and a description of the Knowledge Practices Environment architecture, as well as an overview of the revised technical design process, of the tools' relationship with the research studies, and of the driving objectives and high-level requirements relevant to the present specifications. The full specifications of the tools are provided in annexes 1-9.

    Defining positioning in a core ontology for robotics

    Unambiguous definition of spatial position and orientation is of crucial importance for robotics. In this paper we propose an ontology about positioning. It is part of a more extensive core ontology being developed by the IEEE RAS Working Group on ontologies for robotics and automation. The core ontology should provide a common ground for further ontology development in the field. We give a brief overview of the concepts in the core ontology and then describe an integrated approach for representing quantitative and qualitative position information.
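
    A minimal sketch of what integrated quantitative and qualitative position information could look like as a data structure. The class names and relation vocabulary are illustrative assumptions, not the actual terms of the IEEE RAS core ontology.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class QuantitativePosition:
    """Metric position: coordinates expressed in a named reference frame."""
    frame: str                       # e.g., "map"
    xyz: Tuple[float, float, float]  # position in meters

@dataclass
class QualitativePosition:
    """Qualitative position: a spatial relation to a reference entity."""
    relation: str                    # e.g., "inside", "leftOf" (assumed vocabulary)
    reference: str                   # e.g., "kitchen"

@dataclass
class PositionInfo:
    """A robot's position may carry either kind of information, or both."""
    quantitative: Optional[QuantitativePosition] = None
    qualitative: Optional[QualitativePosition] = None

pose = PositionInfo(
    quantitative=QuantitativePosition("map", (2.4, -1.1, 0.0)),
    qualitative=QualitativePosition("inside", "kitchen"),
)
print(pose)
```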

    Data Work in a Knowledge-Broker Organization: How Cross-Organizational Data Maintenance Shapes Human Data Interactions


    Techno-ecology of gender: the feminist studies in computer science, organizations and design of technologies

    This paper offers an overview of feminist studies in computer science and technology and, in particular, of feminine perspectives on the relational foundations of reality and on an "embodied knowledge" that mediates interactions with, and interpretations of, the world. Feminist studies of sociality with objects, of interactions in socio-technical systems, of new materialism, and of a technology design more oriented to situated practices serve as complementary stances to a strongly abstract, rational, and deterministic computer science, and as critical tools for examining the relations between the masculine and the feminine in organizations and with everyday technologies.